Chest sounds recorded with a stethoscope provide an opportunity for remote cardiorespiratory health monitoring of neonates. However, reliable monitoring requires high-quality heart and lung sounds. This paper presents novel non-negative matrix factorisation (NMF) and non-negative matrix co-factorisation (NMCF) methods for neonatal chest sound separation. To evaluate these methods and compare them with existing single-source separation methods, an artificial mixture dataset comprising heart, lung and noise sounds was generated, and signal-to-noise ratios were then calculated for these artificial mixtures. The methods were also tested on real-world noisy neonatal chest sounds and assessed based on vital sign estimation error and a 1-5 signal quality score developed in our previous works. Additionally, the computational cost of all methods was evaluated to determine their suitability for real-time processing. Overall, both the proposed NMF and NMCF methods outperform the next best existing method by 2.7 dB to 11.6 dB on the artificial dataset and by 0.40 to 1.12 in signal quality on the real-world dataset. The median processing time for separating a 10 s recording was found to be 28.3 s for NMCF and 342 ms for NMF. Given their stable and robust performance, we believe the proposed methods can be used to denoise neonatal heart and lung sounds in real-world environments. Code for the proposed and existing methods can be found at: https://github.com/egrooby-monash/heart-and-lung-sound-eparation.
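As a concrete illustration of the general technique, the sketch below performs supervised NMF separation on magnitude spectrograms: non-negative dictionaries are learned from heart and lung reference recordings, activations are then estimated for a mixture, and each source is reconstructed with a soft mask. This is a minimal generic sketch, not the authors' exact NMF/NMCF formulation; the toy spectrograms, ranks and iteration counts are placeholder assumptions.

```python
# Minimal supervised NMF separation sketch on magnitude spectrograms
# (generic technique only; not the paper's exact NMF/NMCF methods).
import numpy as np


def nmf_dictionary(V, rank, n_iter=200, eps=1e-9):
    """Learn a non-negative dictionary W and activations H with V ~ W @ H."""
    F, T = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        # Multiplicative updates for the Frobenius (Euclidean) cost.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H


def separate(V_mix, W_heart, W_lung, n_iter=200, eps=1e-9):
    """Fix the concatenated dictionaries, fit activations to the mixture,
    then reconstruct each source with a soft (Wiener-like) mask."""
    W = np.concatenate([W_heart, W_lung], axis=1)
    rng = np.random.default_rng(1)
    H = rng.random((W.shape[1], V_mix.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V_mix) / (W.T @ W @ H + eps)
    k = W_heart.shape[1]
    V_heart = W_heart @ H[:k]
    V_lung = W_lung @ H[k:]
    total = V_heart + V_lung + eps
    return V_mix * V_heart / total, V_mix * V_lung / total


# Toy usage with random "spectrograms" (frequency bins x time frames).
V_heart_ref = np.abs(np.random.randn(257, 100))
V_lung_ref = np.abs(np.random.randn(257, 100))
V_mix = np.abs(np.random.randn(257, 120))
W_h, _ = nmf_dictionary(V_heart_ref, rank=10)
W_l, _ = nmf_dictionary(V_lung_ref, rank=10)
S_heart, S_lung = separate(V_mix, W_h, W_l)
```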
Objective: To determine whether a realistic yet computationally efficient model of the electrocardiogram (ECG) can be used to pre-train a deep neural network (DNN) with a wide range of morphologies and abnormalities specific to a given condition, in this case T-wave alternans (TWA) resulting from post-traumatic stress disorder (PTSD), and thereby significantly boost performance on a small database of rare individuals. Methods: Using a previously validated artificial ECG model, we generated 180,000 artificial ECGs with or without significant TWA, with varying heart rates, breathing rates, TWA amplitudes and ECG morphologies. A DNN trained on 70,000 patients to classify 25 different rhythms had its output layer modified to a binary class (TWA or no-TWA, or equivalently, PTSD or no-PTSD), and transfer learning was performed on the artificial ECGs. In a final transfer learning step, the DNN was trained and cross-validated on ECGs from 12 PTSD subjects and 24 controls, for all combinations of the three databases. Main results: The best-performing approach (AUROC = 0.77, accuracy = 0.72, F1-score = 0.64) was obtained by performing both transfer learning steps, using the pre-trained arrhythmia DNN, the artificial data and the real PTSD-related ECG data. Removing the artificial data from training led to the largest drop in performance. Removing the arrhythmia data from training produced a modest but significant drop in performance. The final model showed no significant drop in performance on the artificial data, indicating no overfitting. Significance: In healthcare, it is common to have only a small collection of high-quality data and labels, or a larger database with lower-quality (and less relevant) labels. The paradigm presented here, involving model-based performance boosting through transfer learning on a large realistic artificial database and a partially relevant real database, provides a solution.
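The two-stage recipe (swap the pretrained classifier's output layer for a binary head, fine-tune on the large artificial dataset, then fine-tune again on the small real dataset) can be sketched as below. The toy backbone, layer names and hyperparameters are assumptions for illustration, not the paper's actual DNN or training configuration.

```python
# Hedged transfer-learning sketch in PyTorch (illustrative architecture only).
import torch
import torch.nn as nn


class ToyRhythmNet(nn.Module):
    """Stand-in for the pretrained 25-class rhythm classifier (architecture assumed)."""
    def __init__(self, feature_dim=64, n_classes=25):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, feature_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feature_dim, n_classes)

    def forward(self, x):
        return self.classifier(self.backbone(x))


def to_binary_head(model, feature_dim=64):
    """Swap the 25-class output layer for a binary (TWA vs no-TWA) head."""
    model.classifier = nn.Linear(feature_dim, 2)
    return model


def fine_tune(model, loader, epochs=1, lr=1e-4):
    """One transfer-learning stage: fine-tune the whole network on the given data."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for ecg, label in loader:
            opt.zero_grad()
            loss_fn(model(ecg), label).backward()
            opt.step()
    return model


# Stage 1: fine-tune on the large artificial TWA dataset;
# stage 2: repeat fine_tune on the small real PTSD/control recordings.
toy_loader = [(torch.randn(8, 1, 1000), torch.randint(0, 2, (8,)))]
model = fine_tune(to_binary_head(ToyRhythmNet()), toy_loader)
```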
The fetal electrocardiogram (fECG) was first recorded from the maternal abdominal surface in the early twentieth century. Over the past fifty years, state-of-the-art electronics and signal processing algorithms have been used to turn noninvasive fetal electrocardiography into a reliable technology for fetal cardiac monitoring. In this chapter, the major signal processing techniques developed for modeling, extracting and analyzing the fECG from noninvasive maternal abdominal recordings are reviewed in detail. The main topics of the chapter include: 1) the electrophysiology of the fECG from a signal processing viewpoint, 2) mathematical models of the maternal volume conduction medium and waveform models of the fECG acquired from the body surface, 3) signal acquisition requirements, 4) model-based techniques for fECG noise and interference cancellation, including adaptive filters and semi-blind source separation techniques, and 5) recent algorithmic advances in fetal motion tracking and online fECG extraction.
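As a small illustration of item 4), the sketch below applies a least-mean-squares (LMS) adaptive filter to cancel the maternal ECG component from an abdominal signal, leaving a fetal residual. The signals and parameters are synthetic placeholders, and the code is a generic textbook LMS canceller rather than a validated fECG extractor from the chapter.

```python
# Generic LMS adaptive interference cancellation sketch (synthetic signals).
import numpy as np


def lms_cancel(abdominal, maternal_ref, n_taps=32, mu=0.01):
    """Return (estimated maternal component, residual ~ fetal ECG)."""
    w = np.zeros(n_taps)
    y = np.zeros_like(abdominal)
    e = np.zeros_like(abdominal)
    for n in range(n_taps, len(abdominal)):
        x = maternal_ref[n - n_taps:n][::-1]   # most recent reference samples
        y[n] = w @ x                           # predicted maternal contribution
        e[n] = abdominal[n] - y[n]             # residual after cancellation
        w += 2 * mu * e[n] * x                 # LMS weight update
    return y, e


# Toy usage: maternal beat leaking into the abdominal lead plus a weak fetal beat.
t = np.arange(0, 10, 1 / 500)                          # 500 Hz sampling
maternal = np.sin(2 * np.pi * 1.2 * t)                 # ~72 bpm surrogate
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)              # ~138 bpm surrogate
abdominal = 0.8 * maternal + fetal + 0.05 * np.random.randn(t.size)
_, fetal_estimate = lms_cancel(abdominal, maternal)
```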
Cardiac auscultation is one of the most cost-effective techniques for detecting and identifying many heart conditions. Computer-aided decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal, since most of them aim only to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., they provide only a binary ground-truth variable (normal vs. abnormal). This is mainly due to the lack of large publicly available datasets in which a more detailed description of such abnormal waves (e.g., heart murmurs) is provided. To pave the way for more effective research on auscultation-based medical recommendation systems, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings were collected from the four main auscultation locations of 1568 patients, and in the process 215780 heart sounds were manually annotated. Furthermore, and for the first time, each heart murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the location where the murmur is detected most intensely. Such a detailed description of a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world applications for detecting and analyzing murmurs for diagnostic purposes.
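To make the annotation scheme concrete, the snippet below defines a hypothetical per-murmur record carrying the attributes mentioned above (timing, shape, pitch, grading, quality and auscultation locations). The field names and example values are purely illustrative and do not reflect the dataset's actual file format or label vocabulary.

```python
# Hypothetical per-murmur annotation record (illustrative only; not the dataset schema).
from dataclasses import dataclass, field
from typing import List


@dataclass
class MurmurAnnotation:
    patient_id: str
    locations_present: List[str] = field(default_factory=list)  # e.g. ["AV", "PV"]
    most_audible_location: str = ""
    timing: str = ""    # e.g. "early-systolic"
    shape: str = ""     # e.g. "decrescendo"
    pitch: str = ""     # e.g. "medium"
    grading: str = ""   # e.g. "II/VI"
    quality: str = ""   # e.g. "blowing"


example = MurmurAnnotation(
    patient_id="0001",
    locations_present=["AV", "PV"],
    most_audible_location="PV",
    timing="early-systolic",
    shape="decrescendo",
    pitch="medium",
    grading="II/VI",
    quality="blowing",
)
```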
Compared to regular cameras, Dynamic Vision Sensors or Event Cameras can output compact visual data based on a change in the intensity in each pixel location asynchronously. In this paper, we study the application of current image-based SLAM techniques to these novel sensors. To this end, the information in adaptively selected event windows is processed to form motion-compensated images. These images are then used to reconstruct the scene and estimate the 6-DOF pose of the camera. We also propose an inertial version of the event-only pipeline to assess its capabilities. We compare the results of different configurations of the proposed algorithm against the ground truth for sequences of two publicly available event datasets. We also compare the results of the proposed event-inertial pipeline with the state-of-the-art and show it can produce comparable or more accurate results provided the map estimate is reliable.
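One common way to form such motion-compensated images is to warp each event back to a reference time along an estimated optical flow and accumulate the warped events into a count image. The sketch below shows only that accumulation step, under an assumed constant flow; it is a generic illustration rather than the paper's pipeline, and the flow estimation itself is omitted.

```python
# Simplified "image of warped events" accumulation (constant-flow assumption).
import numpy as np


def warped_event_image(events, flow, t0, shape):
    """events: (N, 4) array of [x, y, t, polarity]; flow: (vx, vy) in px/s."""
    img = np.zeros(shape)
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.round(x - flow[0] * (t - t0)).astype(int)   # warp back to reference time
    yw = np.round(y - flow[1] * (t - t0)).astype(int)
    inside = (xw >= 0) & (xw < shape[1]) & (yw >= 0) & (yw < shape[0])
    np.add.at(img, (yw[inside], xw[inside]), 1)          # accumulate event counts
    return img


# Toy usage: 1,000 random events over 10 ms on a 180x240 sensor.
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.uniform(0, 240, 1000),       # x
    rng.uniform(0, 180, 1000),       # y
    rng.uniform(0.0, 0.01, 1000),    # timestamp (s)
    rng.choice([-1, 1], 1000),       # polarity
])
image = warped_event_image(events, flow=(50.0, -20.0), t0=0.0, shape=(180, 240))
```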
The purpose of this work was to tackle practical issues which arise when using a tendon-driven robotic manipulator with a long, passive, flexible proximal section in medical applications. A separable robot which overcomes difficulties in actuation and sterilization is introduced, in which the body containing the electronics is reusable and the remainder is disposable. A control input which resolves the redundancy in the kinematics and a physical interpretation of this redundancy are provided. The effect of a static change in the proximal section angle on bending angle error was explored under four testing conditions for a sinusoidal input. Bending angle error increased for increasing proximal section angle for all testing conditions, with an average error reduction of 41.48% for re-tension, 4.28% for hysteresis, and 52.35% for re-tension + hysteresis compensation relative to the baseline case. Two major sources of error in tracking the bending angle were identified: time delay from hysteresis and DC offset from the proximal section angle. Examination of these error sources revealed that the simple hysteresis compensation was most effective for removing time delay and re-tension compensation for removing DC offset, which was the primary source of increasing error. The re-tension compensation was also tested for dynamic changes in the proximal section and reduced error in the final configuration of the tip by 89.14% relative to the baseline case.
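The two compensation ideas can be illustrated with a simple sketch: advance the commanded bending trajectory to counter a hysteresis-induced time delay, and pre-subtract a DC offset modelled as proportional to the proximal section angle. Both the linear offset model and all parameters below are assumptions made for illustration, not the compensation scheme identified in the paper.

```python
# Illustrative command compensation: phase advance for delay, linear model for DC offset.
import numpy as np


def compensate_command(t, desired_angle, delay_s, proximal_angle_rad, offset_gain):
    """Phase-advance the desired trajectory and pre-subtract the predicted offset."""
    advanced = np.interp(t + delay_s, t, desired_angle)   # counter hysteresis time delay
    dc_offset = offset_gain * proximal_angle_rad          # assumed linear offset model
    return advanced - dc_offset


# Toy usage: 0.2 Hz sinusoidal bending command, 50 ms delay, 30 deg proximal bend.
t = np.linspace(0, 10, 1000)
desired = 20 * np.sin(2 * np.pi * 0.2 * t)                # degrees
command = compensate_command(t, desired, delay_s=0.05,
                             proximal_angle_rad=np.deg2rad(30), offset_gain=5.0)
```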
Transformers have recently gained attention in the computer vision domain due to their ability to model long-range dependencies. However, the self-attention mechanism, which is the core part of the Transformer model, usually suffers from quadratic computational complexity with respect to the number of tokens. Many architectures attempt to reduce model complexity by limiting the self-attention mechanism to local regions or by redesigning the tokenization process. In this paper, we propose DAE-Former, a novel method that seeks to provide an alternative perspective by efficiently designing the self-attention mechanism. More specifically, we reformulate the self-attention mechanism to capture both spatial and channel relations across the whole feature dimension while staying computationally efficient. Furthermore, we redesign the skip connection path by including the cross-attention module to ensure the feature reusability and enhance the localization power. Our method outperforms state-of-the-art methods on multi-organ cardiac and skin lesion segmentation datasets without requiring pre-training weights. The code is publicly available at https://github.com/mindflow-institue/DAEFormer.
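For intuition, the sketch below shows two attention variants whose cost is linear in the number of tokens: an "efficient attention" that normalises queries over the feature dimension and keys over the tokens, and a transposed "channel attention" whose attention map has shape dim x dim. These are generic illustrations of the idea, not the exact DAE-Former modules.

```python
# Token-count-linear attention variants (generic sketch, not the DAE-Former code).
import torch
import torch.nn.functional as F


def efficient_attention(q, k, v):
    """q, k, v: (batch, tokens, dim). Cost is O(tokens * dim^2)."""
    q = F.softmax(q, dim=-1)          # normalise each query over the feature dim
    k = F.softmax(k, dim=1)           # normalise each key feature over the tokens
    context = k.transpose(1, 2) @ v   # (batch, dim, dim) global context
    return q @ context                # (batch, tokens, dim)


def channel_attention(q, k, v):
    """Attention computed across channels instead of tokens: a (dim x dim) map."""
    attn = F.softmax(q.transpose(1, 2) @ k / q.shape[1] ** 0.5, dim=-1)
    return (attn @ v.transpose(1, 2)).transpose(1, 2)


# Toy usage on a (batch=2, tokens=196, dim=64) feature map.
x = torch.randn(2, 196, 64)
out = efficient_attention(x, x, x) + channel_attention(x, x, x)
```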
A track-before-detect (TBD) particle filter-based method for detection and tracking of low observable objects based on a sequence of image frames in the presence of noise and clutter is studied. At each time instance after receiving a frame of image, first, some preprocessing approaches are applied to the image. Then, it is sent to the detection and tracking algorithm which is based on a particle filter. Performance of the approach is evaluated for detection and tracking of an object in different scenarios including noise and clutter.
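A minimal version of such a track-before-detect particle filter is sketched below: particles over the state (x, y, vx, vy) are propagated with a near-constant-velocity model, weighted by the frame intensity at their predicted positions, and resampled when the effective sample size drops. The motion and likelihood models are simplistic placeholders chosen for illustration only.

```python
# Minimal TBD particle filter step on a single preprocessed frame (illustrative models).
import numpy as np

rng = np.random.default_rng(0)


def pf_step(particles, weights, frame, dt=1.0, q=0.5):
    """One predict / update / resample cycle."""
    # Predict: constant-velocity motion plus process noise.
    particles[:, 0] += particles[:, 2] * dt + q * rng.standard_normal(len(particles))
    particles[:, 1] += particles[:, 3] * dt + q * rng.standard_normal(len(particles))
    # Update: weight by intensity at each particle location (crude likelihood).
    xi = np.clip(particles[:, 0].astype(int), 0, frame.shape[1] - 1)
    yi = np.clip(particles[:, 1].astype(int), 0, frame.shape[0] - 1)
    weights *= frame[yi, xi] + 1e-12
    weights /= weights.sum()
    # Resample when the effective sample size drops below half the particle count.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1 / len(particles))
    return particles, weights


# Toy usage: a dim target at (40, 60) in a noisy 100x100 frame.
frame = np.abs(rng.normal(0, 0.1, (100, 100)))
frame[60, 40] += 1.0
particles = np.column_stack([rng.uniform(0, 100, (1000, 2)), rng.normal(0, 1, (1000, 2))])
weights = np.full(1000, 1 / 1000)
particles, weights = pf_step(particles, weights, frame)
```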
Machine reading comprehension (MRC) is a long-standing topic in natural language processing (NLP). The MRC task aims to answer a question based on a given context. Recent studies focus on multi-hop MRC, a more challenging extension of MRC in which answering a question requires combining disjoint pieces of information spread across the context. Due to the complexity and importance of multi-hop MRC, a large number of studies have focused on this topic in recent years; it is therefore necessary and worthwhile to review the related literature. This study investigates recent advances in multi-hop MRC approaches based on 31 studies from 2018 to 2022. In this regard, first, the multi-hop MRC problem definition is introduced; then the 31 models are reviewed in detail, with a strong focus on their multi-hop aspects, and categorized based on their main techniques. Finally, a fine-grained, comprehensive comparison of the models and techniques is presented.
Multi-hop machine reading comprehension is a challenging task that aims to answer a question based on disjoint pieces of information spread across different passages. Evaluation metrics and datasets are a vital part of multi-hop MRC, since models cannot be trained and evaluated without them; moreover, the challenges posed by datasets are often an important motivation for improving existing models. Given the increasing attention to this field, it is necessary and worthwhile to review them in detail. This study presents a comprehensive survey of recent advances in multi-hop MRC evaluation metrics and datasets. In this regard, first, the multi-hop MRC problem definition is presented; then the evaluation metrics are examined with respect to their multi-hop aspects. In addition, 15 multi-hop datasets published from 2017 to 2022 are reviewed in detail, and a comprehensive analysis is provided at the end. Finally, open issues in this field are discussed.